Results 1 - 20 of 111
1.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20245449

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic had a major impact on global health and was associated with millions of deaths worldwide. During the pandemic, imaging characteristics of chest X-ray (CXR) and chest computed tomography (CT) played an important role in screening, diagnosis, and monitoring of disease progression. Various studies suggested that quantitative image analysis methods, including artificial intelligence and radiomics, can greatly boost the value of imaging in the management of COVID-19. However, few studies have explored the use of longitudinal multi-modal medical images with varying visit intervals for outcome prediction in COVID-19 patients. This study aims to explore the potential of longitudinal multimodal radiomics in predicting the outcome of COVID-19 patients by integrating both CXR and CT images with variable visit intervals through deep learning. 2274 patients who underwent CXR and/or CT scans during disease progression were selected for this study. Of these, 946 patients were treated at the University of Pennsylvania Health System (UPHS), and data for the remaining 1328 patients were acquired at Stony Brook University (SBU) and curated by the Medical Imaging and Data Resource Center (MIDRC). 532 radiomic features were extracted with the Cancer Imaging Phenomics Toolkit (CaPTk) from the lung regions in CXR and CT images at all visits. We employed two commonly used deep learning algorithms to analyze the longitudinal multimodal features and evaluated the prediction results based on the area under the receiver operating characteristic curve (AUC). Our models achieved testing AUC scores of 0.816 and 0.836, respectively, for the prediction of mortality. © 2023 SPIE.
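The abstract's evaluation metric, AUC, has a simple rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch (not the authors' code; the toy labels and scores below are hypothetical):

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs where the positive case is scored
    higher (ties count as one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy mortality predictions: label 1 = deceased, 0 = survived
labels = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.35, 0.4, 0.3, 0.6]
print(round(auc_score(labels, scores), 3))  # 0.889
```

This pairwise definition is equivalent to the area under the ROC curve and needs no threshold sweep, which makes it handy for quick sanity checks of model outputs.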

2.
Conference on Human Factors in Computing Systems - Proceedings ; 2023.
Article in English | Scopus | ID: covidwho-20244856

ABSTRACT

Children are one of the groups most influenced by COVID-19-related social distancing, and a lack of contact with peers can limit their opportunities to develop social and collaborative skills. However, remote socialization and collaboration as an alternative approach is still a great challenge for children. This paper presents MR.Brick, a Mixed Reality (MR) educational game system that helps children adapt to remote collaboration. A controlled experimental study involving 24 children aged six to ten was conducted to compare MR.Brick with the traditional video game by measuring their social and collaborative skills and analyzing their multi-modal playing behaviours. The results showed that MR.Brick was more conducive to children's remote collaboration experience than the traditional video game. Given the lack of training systems designed for children to collaborate remotely, this study may inspire interaction design and educational research in related fields. © 2023 ACM.

3.
International IEEE/EMBS Conference on Neural Engineering, NER ; 2023-April, 2023.
Article in English | Scopus | ID: covidwho-20243641

ABSTRACT

This study proposes a graph convolutional network (GCN) architecture for fusion of radiological imaging and non-imaging tabular electronic health records (EHR) for the purpose of clinical event prediction. We focused on a cohort of hospitalized patients with a positive RT-PCR test for COVID-19 and developed GCN-based models to predict three dependent clinical events (discharge from hospital, admission into ICU, and mortality) using demographics, billing codes for procedures and diagnoses, and chest X-rays. We hypothesized that the two-fold learning opportunity provided by the GCN is ideal for fusion of imaging information and tabular data as node and edge features, respectively. Our experiments indicate the validity of our hypothesis, as GCN-based predictive models outperform single-modality and traditional fusion models. We compared the proposed models against two variations of imaging-based models: a DenseNet-121 architecture with learnable classification layers, and Random Forest classifiers using a disease severity score estimated by a pre-trained convolutional neural network. The GCN-based model outperforms both imaging-only methods. We also validated our models on an external dataset, where the GCN showed valuable generalization capabilities. We noticed that the edge-formation function can be adapted even after training the GCN model, without limiting the application scope of the model. Our models take advantage of this fact for generalization to external data. © 2023 IEEE.
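The core fusion idea above — imaging embeddings as node features, tabular EHR similarity as edge weights — can be sketched as a single message-passing step. This is an illustrative toy, not the authors' architecture; the feature values, edge weights, and weight matrix below are hypothetical:

```python
import math

def gcn_layer(node_feats, edges, weight):
    """One weighted message-passing step: each patient node averages
    its own and its neighbours' imaging features (edge weight playing
    the role of tabular EHR similarity), then applies a shared linear
    map followed by a tanh nonlinearity."""
    n, d = len(node_feats), len(node_feats[0])
    agg = [list(f) for f in node_feats]   # self-loop contribution
    deg = [1.0] * n
    for i, j, w in edges:                 # undirected weighted edges
        deg[i] += w
        deg[j] += w
        for k in range(d):
            agg[i][k] += w * node_feats[j][k]
            agg[j][k] += w * node_feats[i][k]
    out_dim = len(weight[0])
    return [[math.tanh(sum(agg[i][k] * weight[k][c] for k in range(d)) / deg[i])
             for c in range(out_dim)]
            for i in range(n)]

# three patients with 2-D imaging embeddings; edge weights stand in
# for EHR similarity between patient pairs
feats = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
edges = [(0, 1, 0.2), (1, 2, 0.9)]
w = [[1.0, 0.0], [0.0, 1.0]]              # identity "linear layer"
print(gcn_layer(feats, edges, w))
```

Because the edge weights enter only through this aggregation, the edge-formation function can indeed be swapped after training, which is the adaptability property the abstract exploits for external data.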

4.
EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference ; : 2141-2155, 2023.
Article in English | Scopus | ID: covidwho-20242792

ABSTRACT

Memes can sway people's opinions over social media as they combine visual and textual information in an easy-to-consume manner. Since memes can turn viral instantly, it becomes crucial to infer their intent and potentially associated harmfulness to take timely measures as needed. A common problem associated with meme comprehension lies in detecting the entities referenced and characterizing the role of each of these entities. Here, we aim to understand whether the meme glorifies, vilifies, or victimizes each entity it refers to. To this end, we address the task of role identification of entities in harmful memes, i.e., detecting who is the 'hero', the 'villain', and the 'victim' in the meme, if any. We utilize HVVMemes - a dataset of memes on US politics and COVID-19, released recently as part of the CONSTRAINT@ACL-2022 shared task. It contains memes, the entities referenced, and their associated roles: hero, villain, victim, and other. We further design VECTOR (Visual-semantic role dEteCToR), a robust multi-modal framework for the task, which integrates entity-based contextual information into the multi-modal representation, and compare it to several standard unimodal (text-only or image-only) and multi-modal (image+text) models. Our experimental results show that our proposed model achieves an improvement of 4% over the best baseline and 1% over the best competing stand-alone submission from the shared task. Besides divulging an extensive experimental setup with comparative analyses, we finally highlight the challenges encountered in addressing the complex task of semantic role labeling within memes. © 2023 Association for Computational Linguistics.

5.
ICRTEC 2023 - Proceedings: IEEE International Conference on Recent Trends in Electronics and Communication: Upcoming Technologies for Smart Systems ; 2023.
Article in English | Scopus | ID: covidwho-20241494

ABSTRACT

In recent years, there has been significant growth in the development of machine learning algorithms toward a better experience in patient care. In this paper, a contemporary survey of the deep learning and machine learning techniques used in multimodal signal processing for biomedical applications is presented. Specifically, an overview of the preprocessing approaches and the algorithms proposed for five major biomedical applications is presented, namely detection of cardiovascular diseases, retinal disease detection, stress detection, cancer detection, and COVID-19 detection. In each case, processing of each multimodal data type, such as an image or a text, is discussed in detail. A list of publicly available datasets for each of these applications is also presented. © 2023 IEEE.

6.
EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference ; : 178-188, 2023.
Article in English | Scopus | ID: covidwho-20238781

ABSTRACT

We introduce a new benchmark, COVID-VTS, for fact-checking multi-modal information involving short-duration videos with COVID-19-focused information from both the real world and machine generation. We propose TwtrDetective, an effective model incorporating cross-media consistency checking to detect token-level malicious tampering in different modalities and generate explanations. Due to the scarcity of training data, we also develop an efficient and scalable approach to automatically generate misleading video posts by event manipulation or adversarial matching. We investigate several state-of-the-art models and demonstrate the superiority of TwtrDetective. © 2023 Association for Computational Linguistics.

7.
Qualitative Research ; : 1, 2023.
Article in English | Academic Search Complete | ID: covidwho-20236911

ABSTRACT

This article reflects on collaborative research carried out during the COVID-19 pandemic involving indigenous youth co-investigators from different urban settings in Bolivia and a UK- and Bolivia-based research coordination team. Unlike previous studies that highlight the potential of generating a shared co-presence via virtual engagements and digital methods when face-to-face interactions seem less desirable, this article offers a more cautious account. We question the existence of a shared co-presence and, instead, posit co-presence as fragmented and not necessarily mutual, requiring careful engagement with power imbalances, distinct socio-economic and space-time positionings, and diverse priorities around knowledge generation among team members. These considerations led us to iteratively configure a hybrid research approach that combines synchronous and asynchronous virtual and face-to-face interactions with multi-modal methods. We demonstrate how this approach enabled us to generate a sense of co-presence in a context where collaborator access to a shared space-time was limited, differentiated, or displaced. [ABSTRACT FROM AUTHOR] Copyright of Qualitative Research is the property of Sage Publications Inc. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all abstracts.)

8.
IEEE Transactions on Molecular, Biological, and Multi-Scale Communications ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20236340

ABSTRACT

Airborne pathogen transmission mechanisms play a key role in the spread of infectious diseases such as COVID-19. In this work, we propose a computational fluid dynamics (CFD) approach to model and statistically characterize airborne pathogen transmission via pathogen-laden particles in turbulent channels from a molecular communication viewpoint. To this end, turbulent flows induced by coughing and the turbulent dispersion of droplets and aerosols are modeled by using the Reynolds-averaged Navier-Stokes equations coupled with the realizable k-ε model and the discrete random walk model, respectively. Via simulations realized by a CFD simulator, statistical data for the number of received particles are obtained. These data are post-processed to obtain the statistical characterization of the turbulent effect in the reception and to derive the probability of infection. Our results reveal that turbulence has an irregular effect on the probability of infection, which manifests as a multi-modal distribution, i.e., a weighted sum of normal and Weibull distributions. Furthermore, it is shown that the turbulent MC channel is characterized via multi-modal (weighted sums of normal) distributions or stable distributions, depending on the air velocity. Crown
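The multi-modal distribution described above — a weighted sum of a normal and a Weibull component — can be written down directly. The parameters below are made up for illustration, not the fitted values from the paper:

```python
import math

def normal_weibull_mixture_pdf(x, w, mu, sigma, k, lam):
    """Density of a two-component mixture: weight w on a
    Normal(mu, sigma) and 1 - w on a Weibull(shape k, scale lam),
    the bimodal shape reported for the received-particle counts."""
    normal = (math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
              / (sigma * math.sqrt(2 * math.pi)))
    # Weibull density is supported on x > 0 only
    weibull = ((k / lam) * (x / lam) ** (k - 1) * math.exp(-(x / lam) ** k)
               if x > 0 else 0.0)
    return w * normal + (1 - w) * weibull

# illustrative parameters producing two clearly separated modes
curve = [normal_weibull_mixture_pdf(x / 10, 0.6, 8.0, 1.0, 2.0, 2.0)
         for x in range(0, 121)]
```

Fitting such a mixture to simulated particle counts (e.g. by maximum likelihood) is one way to recover the weighted-sum characterization the abstract mentions.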

9.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20234381

ABSTRACT

Although many AI-based scientific works regarding chest X-ray (CXR) interpretation focused on COVID-19 diagnosis, fewer papers focused on other relevant tasks, like severity estimation, deterioration, and prognosis. The same holds for explainable decisions to estimate COVID-19 prognosis. The international hackathon launched during Dubai Expo 2020, aimed at designing machine learning solutions to help physicians formulate COVID-19 patients' prognosis, was the occasion to develop a machine learning model capable of predicting such prognoses and justifying them through interpretable explanations. The large hackathon dataset comprised subjects characterized by their CXR and numerous clinical features collected during triage. To calculate the prognostic value, our model considered both patients' CXRs and clinical features. After automatic pre-processing to improve their quality, CXRs were processed by a deep learning model to estimate the degree of lung compromise, which was treated as an additional clinical feature. Original clinical parameters suffered from missing values that were adequately handled. We trained and evaluated multiple models to find the best one and fine-tune it before the inference process. Finally, we produced novel explanations, both visual and numerical, to justify the model predictions. Ultimately, our model processes a CXR and several clinical variables to estimate a patient's prognosis related to the COVID-19 disease. It proved to be accurate and was ranked second in the final rankings with 75%, 73.9%, and 74.4% in sensitivity, specificity, and balanced accuracy, respectively. In terms of model explainability, it was ranked first since it was agreed to be the most interpretable by health professionals. © 2023 SPIE.

10.
Proceedings of the 17th INDIACom|2023 10th International Conference on Computing for Sustainable Global Development, INDIACom 2023 ; : 1473-1477, 2023.
Article in English | Scopus | ID: covidwho-20233074

ABSTRACT

Ovarian cancers are the most prevalent cancers with the highest mortality among women. Most women with advanced stages require multimodal therapy, including surgery, radiotherapy, and chemotherapy. The advent of the coronavirus disease in 2019 affected the entire system of healthcare delivery for the majority of patients suffering from cancer. During these tough times, patients suffering from ovarian cancer faced mental trauma, including delays in diagnosis and prognosis, surgeries, chemotherapy, and radiotherapy. Instead of in-person visits, teleconsultations were performed out of fear of infection. This review prioritizes the repercussions of COVID-19 on patients with ovarian cancer, the monitoring of the CA125 trend in ovarian cancer patients with COVID-19, and how COVID-19 affects the rate of mortality in cancer patients. © 2023 Bharati Vidyapeeth, New Delhi.

11.
7th IEEE World Engineering Education Conference, EDUNINE 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2324759

ABSTRACT

As a result of the COVID-19 pandemic, an unprecedented crisis has been generated in all areas of life. In education, this emergency led to the massive closure of face-to-face activities in educational institutions of all levels to prevent the spread of the virus and mitigate its impact. In response to this situation, nine public research centers of the National Council of Science and Technology (CONACYT) of Mexico, CIDETEQ, CIATEJ, CIATEQ, CIESAS, CIMAV, CIO, CIQA, COMIMSA, and ECOSUR, proposed a project through which a set of virtual laboratories would be established, both for teaching and to improve their technological infrastructure for teaching. The project would take advantage of Information and Communications Technology (ICT) tools that allow remote access, the main challenge in developing educational tools that provide for distance education and give continuity to graduate programs. © 2023 IEEE.

12.
2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers, UbiComp/ISWC 2022 ; : 500-502, 2022.
Article in English | Scopus | ID: covidwho-2326694

ABSTRACT

Mental health is a critical societal issue and early screening is vital to enabling timely treatment. The rise of text-based communications provides new modalities that can be used to passively screen for mental illnesses. In this paper we present an approach to screen for anxiety and depression through the reply latency of text messages. We demonstrate this by constructing machine learning models with reply latency features. Our models screen for anxiety with a balanced accuracy of 0.62 and an F1 of 0.73, a notable improvement over prior approaches. With the same participants, our models likewise screen for depression with a balanced accuracy of 0.70 and an F1 of 0.80. We additionally compare these results to those of models trained on data collected prior to the COVID-19 pandemic. Finally, we demonstrate generalizability for screening by combining datasets, which results in comparable accuracy. Latency features could thus be useful in multimodal mobile mental illness screening. © 2022 ACM.
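As an illustration of the kind of feature the abstract describes, reply latencies can be extracted from a timestamped message log. The message format here (timestamp, direction) is hypothetical, not the study's data schema:

```python
from statistics import mean, median

def reply_latency_features(messages):
    """Seconds between each incoming message and the participant's
    next outgoing reply; messages are (timestamp_seconds, direction)
    pairs with direction "in" or "out"."""
    latencies, pending = [], None
    for ts, direction in sorted(messages):
        if direction == "in":
            if pending is None:       # start timing at the oldest unanswered message
                pending = ts
        elif pending is not None:     # an "out" closes the open reply window
            latencies.append(ts - pending)
            pending = None
    return {"mean": mean(latencies),
            "median": median(latencies),
            "max": max(latencies)}

msgs = [(0, "in"), (30, "out"), (100, "in"), (400, "out"), (500, "in"), (520, "out")]
print(reply_latency_features(msgs))  # median 30, max 300
```

Summary statistics like these would then feed a classifier alongside other passively sensed signals; the exact feature set used in the study is not specified here.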

13.
Computer Vision, ECCV 2022, Part XXXVII ; 13697:327-347, 2022.
Article in English | Web of Science | ID: covidwho-2311737

ABSTRACT

Video conferencing, which includes both video and audio content, has contributed to dramatic increases in Internet traffic, as the COVID-19 pandemic forced millions of people to work and learn from home. Because of this, efficient and accurate video quality tools are needed to monitor and perceptually optimize telepresence traffic streamed via Zoom, Webex, Meet, etc. However, existing models are limited in their prediction capabilities on multi-modal, live streaming telepresence content. Here we address the significant challenges of Telepresence Video Quality Assessment (TVQA) in several ways. First, we mitigated the dearth of subjectively labeled data by collecting ~2k telepresence videos from different countries, on which we crowdsourced ~80k subjective quality labels. Using this new resource, we created a first-of-a-kind online video quality prediction framework for live streaming, using a multi-modal learning framework with separate pathways to compute visual and audio quality predictions. Our all-in-one model is able to provide accurate quality predictions at the patch, frame, clip, and audiovisual levels. Our model achieves state-of-the-art performance on both existing quality databases and our new TVQA database, at a considerably lower computational expense, making it an attractive solution for mobile and embedded systems.

14.
Journal of Intelligent & Fuzzy Systems ; 44(3):3501-3513, 2023.
Article in English | Web of Science | ID: covidwho-2310131

ABSTRACT

COVID-19 (Coronavirus Disease of 2019) is one of the most challenging healthcare crises of the twenty-first century. The pandemic has had many negative impacts on all aspects of life and livelihoods. Despite the recent development of relevant vaccines, such as Pfizer/BioNTech mRNA, AstraZeneca, or Moderna, the emergence of new virus mutations and their fast infection rates still pose significant threats to public health. In this context, early detection of the disease is an important factor in reducing its effect and quickly controlling the spread of the pandemic. Nevertheless, many countries still rely on methods that are either expensive and time-consuming (i.e., reverse-transcription polymerase chain reaction) or uncomfortable and difficult for self-testing (i.e., rapid antigen nasal tests). Recently, deep learning methods have been proposed as a potential solution for COVID-19 analysis. However, previous works usually focus on a single symptom, which can omit critical information for disease diagnosis. Therefore, in this study, we propose a multi-modal method to detect COVID-19 using cough sounds and self-reported symptoms. The proposed method consists of five neural networks to deal with different input features, including CNN-biLSTM for MFCC features, EfficientNetV2 for Mel spectrogram images, MLP for self-reported symptoms, C-YAMNet for cough detection, and RNNoise for noise canceling. Experimental results demonstrated that our method outperformed the other state-of-the-art methods with a high AUC, accuracy, and F1-score of 98.6%, 96.9%, and 96.9% on the testing set.
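A five-network design like the one above implies a fusion step combining per-modality scores into one decision. A minimal late-fusion sketch — the fusion weights and threshold are assumptions for illustration, not the paper's method:

```python
def fuse_scores(scores, weights, threshold=0.5):
    """Weighted late fusion of per-modality COVID-19 probabilities
    (e.g. cough audio, Mel spectrogram, self-reported symptoms).
    Returns the fused probability and the binary decision."""
    fused = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return fused, fused >= threshold

# hypothetical outputs of the audio, spectrogram, and symptom branches
fused, positive = fuse_scores([0.82, 0.74, 0.40], [0.4, 0.4, 0.2])
print(round(fused, 3), positive)  # 0.704 True
```

In practice the combination weights would be learned or tuned on a validation set rather than fixed by hand; the point here is only the shape of the fusion step.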

15.
IEEE Transactions on Mobile Computing ; 22(5):2551-2568, 2023.
Article in English | Scopus | ID: covidwho-2306810

ABSTRACT

Multi-modal sensors on mobile devices (e.g., smart watches and smartphones) have been widely used to ubiquitously perceive human mobility and body motions for understanding social interactions between people. This work investigates the correlations between the multi-modal data observed by mobile devices and social closeness among people along their trajectories. To close the gap between cyber-world data distances and physical-world social closeness, this work quantifies the cyber distances between multi-modal data. The human mobility traces and body motions are modeled as cyber signatures based on ambient Wi-Fi access points and accelerometer data observed by mobile devices that explicitly indicate the mobility similarity and movement similarity between people. To verify the merits of modeled cyber distances, we design the localization-free CybeR-physIcal Social dIStancing (CRISIS) system that detects if two persons are physically non-separate (i.e., not social distancing) due to close social interactions (e.g., taking similar mobility traces simultaneously or having a handshake with physical contact). Extensive experiments are conducted in two small-scale environments and a large-scale environment with different densities of Wi-Fi networks and diverse mobility and movement scenarios. The experimental results indicate that our approach is not affected by uncertain environmental conditions and human mobility with an overall detection accuracy of 98.41% in complex mobility scenarios. Furthermore, extensive statistical analysis based on 2-dimensional (2D) and 3-dimensional (3D) mobility datasets indicates that the proposed cyber distances are robust and well-synchronized with physical proximity levels. © 2002-2012 IEEE.

16.
30th ACM International Conference on Multimedia, MM 2022 ; : 7386-7388, 2022.
Article in English | Scopus | ID: covidwho-2302949

ABSTRACT

The fifth ACM International Workshop on Multimedia Content Analysis in Sports (ACM MMSports'22) is part of the ACM International Conference on Multimedia 2022 (ACM Multimedia 2022). After two years of pure virtual MMSports workshops due to COVID-19, MMSports'22 is held on-site again. The goal of this workshop is to bring together researchers and practitioners from academia and industry to address challenges and report progress in mining, analyzing, understanding, and visualizing multimedia/multimodal data in sports, sports broadcasts, sports games and sports medicine. The combination of sports and modern technology offers a novel and intriguing field of research with promising approaches for visual broadcast augmentation and understanding, for statistical analysis and evaluation, and for sensor fusion during workouts as well as competitions. There is a lack of research communities focusing on the fusion of multiple modalities. We are helping to close this research gap with this workshop series on multimedia content analysis in sports. Related Workshop Proceedings are available in the ACM DL at: https://dl.acm.org/doi/proceedings/10.1145/3552437. © 2022 Owner/Author.

17.
10th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2300062

ABSTRACT

Due to steadily increasing digitalization and the lack of social contact during the COVID-19 pandemic, the workload and stress of software developers have increased. This may lead to psychological overwhelm when negative emotions caused by heavy stress are not detected early enough to be treated effectively. Machine learning has made it possible to automatically recognize emotions in human beings using physiological features. Nonetheless, current research lacks methods to detect psychological overwhelm in software developers early during work. Furthermore, means are necessary to react properly to such detections. In this research, we investigate methods for enabling automatic emotion regulation for psychological overwhelm of software developers using multimodal physiological sensors, machine learning, and the qualitative inquiry method of Interpretative Phenomenological Analysis. The goal is to find solutions that improve the psychological well-being of software developers and the associated quality of software development through the use of emotion regulation techniques. Raising awareness of the problem of psychological overwhelm among software developers will lead to a more profound understanding of its impact on the overall quality of software development and the mental health of software developers. © 2022 IEEE.

18.
Mathematics ; 11(6), 2023.
Article in English | Scopus | ID: covidwho-2297226

ABSTRACT

Deep learning is a sub-discipline of artificial intelligence that uses artificial neural networks, a machine learning technique, to extract patterns and make predictions from large datasets. In recent years, it has achieved rapid development and is widely used in numerous disciplines with fruitful results. Learning valuable information from complex, high-dimensional, and heterogeneous biomedical data is a key challenge in transforming healthcare. In this review, we provide an overview of emerging deep-learning techniques, COVID-19 research involving deep learning, and concrete examples of deep-learning methods in COVID-19 diagnosis, prognosis, and treatment management. Deep learning can process medical imaging data, laboratory test results, and other relevant data to diagnose diseases and judge disease progression and prognosis, and even recommend treatment plans and drug-use strategies to accelerate drug development and improve drug quality. Furthermore, it can help governments develop proper prevention and control measures. We also assess the current limitations and challenges of deep learning in therapy precision for COVID-19, including the lack of phenotypically abundant data and the need for more interpretable deep-learning models. Finally, we discuss how current barriers can be overcome to enable future clinical applications of deep learning. © 2023 by the authors.

19.
16th ACM International Conference on Web Search and Data Mining, WSDM 2023 ; : 706-714, 2023.
Article in English | Scopus | ID: covidwho-2273720

ABSTRACT

Memes can be a useful way to spread information because they are funny, easy to share, and can spread quickly and reach further than other forms. With increased interest in COVID-19 vaccines, vaccination-related memes have grown in number and reach. Meme analysis can be difficult because memes use sarcasm and often require contextual understanding. Previous research has shown promising results but could be improved by capturing global and local representations within memes to model contextual information. Further, the limited public availability of annotated vaccine-critical meme datasets limits our ability to design computational methods to help design targeted interventions and boost vaccine uptake. To address these gaps, we present VaxMeme, which consists of 10,244 manually labelled memes. With VaxMeme, we propose a new multimodal framework designed to improve the memes' representation by learning the global and local representations of memes. The improved representations are then fed to an attentive representation learning module to capture contextual information for classification using an optimised loss function. Experimental results show that our framework outperformed state-of-the-art methods with an F1-score of 84.2%. We further analyse the transferability and generalisability of our framework and show that understanding both modalities is important to identify vaccine-critical memes on Twitter. Finally, we discuss how understanding memes can be useful in designing shareable vaccination promotion and myth-debunking memes and monitoring their uptake on social media platforms. © 2023 ACM.

20.
5th IEEE International Conference on Advances in Science and Technology, ICAST 2022 ; : 220-224, 2022.
Article in English | Scopus | ID: covidwho-2260500

ABSTRACT

This study presents a detailed survey of different works related to sentiment analysis. The COVID-19 pandemic and its impact on people's mental health act as the driving force behind this survey. The survey can help study sentiment analysis and the approaches taken in many studies to detect human emotions via advanced technology. It can also help in improving present systems by finding loopholes and increasing their accuracy. Various lexicon- and ML-based systems and models like Word2Vec and LSTM were studied in the surveyed papers. Some of the current and future directions highlighted were Twitter sentiment analysis, review-based market analysis, determining changing behavior and emotions in a given time period, and detecting the mental health of employees and students. This survey provides details related to trends and topics in sentiment analysis and an in-depth understanding of various technologies used in different studies. It also gives an insight into the wide variety of applications related to sentiment analysis. © 2022 IEEE.
